18 research outputs found

    Static Analysis of Parity Games: Alternating Reachability Under Parity

    It is well understood that solving parity games is equivalent, up to polynomial time, to model checking of the modal mu-calculus. It is a long-standing open problem whether solving parity games (or model checking modal mu-calculus formulas) can be done in polynomial time. A recent approach to studying this problem has been the design of partial solvers, algorithms that run in polynomial time and that may only solve parts of a parity game. Although it was shown that such partial solvers can completely solve many practical benchmarks, the design of such partial solvers was somewhat ad hoc, limiting a deeper understanding of the potential of that approach. We here provide robust foundations for deeper analysis through a new form of game, alternating reachability under parity. We prove the determinacy of these games and use this determinacy to define, for each player, a monotone fixed point over an ordered domain of height linear in the size of the parity game, such that all nodes in its greatest fixed point are won by said player in the parity game. We show, through theoretical and experimental work, that such greatest fixed points and their computation lead to partial solvers that run in polynomial time. These partial solvers are based on established principles of static analysis and are more effective than partial solvers studied in extant work.
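The fixed-point recipe described in this abstract can be sketched in a few lines of Python. The toy game encoding and the operator F below are illustrative assumptions, not the paper's actual alternating-reachability operator; the sketch only shows the generic pattern: iterate a monotone operator downward from the full node set, which terminates after at most |V| rounds because the domain has height linear in the game size.

```python
def gfp(F, nodes):
    """Compute the greatest fixed point of a monotone operator F over
    subsets of nodes by iterating X := F(X) downward from the full set.
    Terminates in at most len(nodes) + 1 rounds."""
    X = set(nodes)
    while True:
        Y = F(X)
        if Y == X:
            return X
        X = Y

# Toy parity game: node -> (owner, priority, successors).
game = {
    "a": (0, 2, ["b"]),
    "b": (0, 2, ["a", "c"]),
    "c": (1, 1, ["c"]),
}

def F(X):
    """Illustrative monotone operator: keep nodes of even priority that
    have a successor still in X (a crude 'stay within even priorities
    forever' test). Invented for this sketch."""
    return {v for v in X
            if game[v][1] % 2 == 0
            and any(w in X for w in game[v][2])}

won_by_player_0 = gfp(F, game.keys())
```

Running this on the toy game decides nodes "a" and "b" for player 0 and leaves "c" undecided by this particular operator, which is exactly the partial-solver behaviour: sound where it answers, silent elsewhere.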

    A domain equation for refinement of partial systems


    Bounded analysis of constrained dynamical systems: a case study in nuclear arms control

    We introduce a simple dynamical system that describes key features of a bilateral nuclear arms control regime. The evolution of each party's beliefs and declarations under the regime is represented, and the effects of inspection processes are captured. Bounded analysis of this model allows us to explore, within a finite horizon, the consequences of changes to the rules of the arms control process and to the strategies of each party, bounded scope invariants for variables of interest, and dynamics for initial states containing strict uncertainty. Together these would potentially enable a decision support system to consider cases of interest irrespective of unknowns. We realize such abilities by building a Python package that draws on the capabilities of a Satisfiability Modulo Theory (SMT) solver to explore particular scenarios and to optimize measures of interest - such as the belief of one nation in the statements made by another, or the timing of an unscheduled inspection such that it has maximum value. We show that these capabilities can in principle support the design or assessment of future bilateral arms control instruments by applying them to a set of representative and relevant test scenarios with realistic finite horizons.
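The flavour of "bounded analysis under strict uncertainty" can be illustrated with a stdlib-only sketch. The paper drives an SMT solver; here we substitute simple interval propagation, and the belief-update rule, gains and decay factor are all invented for illustration.

```python
def clamp(x):
    """Keep a belief value inside [0, 1]."""
    return max(0.0, min(1.0, x))

def step(interval, inspected):
    """Hypothetical belief update over an uncertainty interval:
    a passed inspection raises confidence (assumed gain 0.1),
    a round without inspection decays it (assumed factor 0.9)."""
    lo, hi = interval
    if inspected:
        return (clamp(lo + 0.1), clamp(hi + 0.1))
    return (clamp(lo * 0.9), clamp(hi * 0.9))

def bounded_invariant(init, schedule):
    """Propagate the belief interval over a finite horizon and return
    a scope invariant [min lo, max hi] covering every reachable state,
    for all initial beliefs in the given interval."""
    lo, hi = init
    best_lo, best_hi = lo, hi
    for inspected in schedule:
        lo, hi = step((lo, hi), inspected)
        best_lo, best_hi = min(best_lo, lo), max(best_hi, hi)
    return (best_lo, best_hi)

# Strictly uncertain initial belief in [0.4, 0.6]; 4-round schedule.
inv = bounded_invariant((0.4, 0.6), [True, False, True, False])
```

The returned invariant bounds the belief variable for every initial state in the interval, which is the kind of guarantee the abstract's "bounded scope invariants" refer to; the SMT formulation additionally lets one optimize, e.g., inspection timing.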

    An in-depth case study: modelling an information barrier with Bayesian Belief Networks

    We present in detail a quantitative Bayesian Belief Network (BBN) model of the use of an information barrier system during a nuclear arms control inspection, and an analysis of this model using the capabilities of a Satisfiability Modulo Theory (SMT) solver. Arms control verification processes do not in practice allow the parties involved to gather complete information about each other, and therefore any model we use must be able to cope with the limited information, subjective assessment and uncertainty in this domain. We have previously extended BBNs to allow this kind of uncertainty in parameter values (such as probabilities) to be reflected; these constrained BBNs (cBBNs) offer the potential for more robust modelling, which in that study we demonstrated with a simple information barrier model. We now present a much more detailed model of a similar verification process, based on the technical capabilities and deployment concept of the UK-Norway Initiative (UKNI) Information Barrier system, demonstrating the scalability of our previously presented approach. We discuss facets of the model itself in detail, before analysing pertinent questions of interest to give examples of the power of this approach.
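The cBBN idea of replacing point probabilities by constraints can be sketched without an SMT solver for a tiny network. The two-node network and all interval values below are invented, not the UKNI model; enumerating interval endpoints is valid here only because the query is multilinear in the parameters (the paper's SMT encoding handles general constraints).

```python
from itertools import product

# CPT entries given as intervals instead of point probabilities:
# P(present), P(green light | present), P(green light | absent).
intervals = {
    "p_present": (0.4, 0.6),
    "p_green_given_present": (0.9, 0.95),
    "p_green_given_absent": (0.0, 0.05),
}

def p_green(p, gp, ga):
    """Marginal P(green) by total probability over the parent node."""
    return p * gp + (1 - p) * ga

# Bound the query over all parameter choices consistent with the
# intervals by evaluating at every corner of the parameter box.
corners = product(*intervals.values())
values = [p_green(*c) for c in corners]
bounds = (min(values), max(values))
```

Instead of a single number, the analysis yields an interval for P(green), so conclusions can be checked for robustness against the declared parameter uncertainty.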

    Proof of Kernel Work: a democratic low-energy consensus for distributed access-control protocols

    We adjust the Proof of Work (PoW) consensus mechanism used in Bitcoin and Ethereum so that we can build on its strength while also addressing, in part, some of its perceived weaknesses. Notably, our work is motivated by the high energy consumption for mining PoW, and we want to restrict the use of PoW to a configurable, expected number of nodes, as a function of the local blockchain state. The approach we develop for this rests on three pillars: (i) Proof of Kernel Work (PoKW), a means of dynamically reducing the set of nodes that can participate in the solving of PoW puzzles such that an adversary cannot increase his attack surface because of such a reduction; (ii) Practical Adaptation of Existing Technology, a realization of this PoW reduction through an adaptation of existing blockchain and enterprise technology stacks; and (iii) Machine Learning for Adaptive System Resiliency, the use of techniques from artificial intelligence to make our approach adaptive to system, network and attack dynamics. We develop here, in detail, the first pillar and illustrate the second pillar through a real use case, a pilot project done with Porsche on controlling permissions to vehicle and data log accesses. We also discuss pertinent attack vectors for PoKW consensus and their mitigation. Moreover, we sketch how our approach may lead to more democratic PoKW-based blockchain systems for public networks that may inherit the resilience of blockchains based on PoW.
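The "kernel" in PoKW is the dynamically chosen subset of nodes allowed to attempt the PoW puzzle, derived from local chain state so all honest nodes agree on it. The lottery below (hash each node identity with the latest block hash, keep the k smallest digests) is an illustrative stand-in, not the paper's exact selection mechanism.

```python
import hashlib

def kernel(nodes, block_hash, k):
    """Deterministically pick k eligible miners from the local chain
    state: every node that knows block_hash computes the same kernel,
    and the kernel re-randomises as the chain grows."""
    def ticket(node_id):
        # Unpredictable but verifiable per-node lottery ticket.
        return hashlib.sha256((block_hash + node_id).encode()).hexdigest()
    return sorted(nodes, key=ticket)[:k]

nodes = [f"node{i}" for i in range(10)]
eligible = kernel(nodes, "blockhash-1", 3)
```

Only the three selected nodes would mine the next block, which is how the expected number of PoW participants stays configurable and small, reducing total energy spent on puzzle solving.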

    Partial solvers for parity games: effective polynomial-time composition

    Partial methods play an important role in formal methods and beyond. Recently such methods were developed for parity games, where polynomial-time partial solvers decide the winners of a subset of nodes. We investigate here how effective polynomial-time partial solvers can be by studying interactions of partial solvers based on generic composition patterns that preserve polynomial-time computability. We show that use of such composition patterns discovers new partial solvers - including those that merge node sets that have the same but unknown winner - by studying games that composed partial solvers can neither solve nor simplify. We experimentally validate that this data-driven approach to refinement leads to polynomial-time partial solvers that can solve all standard benchmarks of structured games. For one of these polynomial-time partial solvers, not a single game among a few billion random games of varying configuration was found that it could not solve completely.
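One generic composition pattern that preserves polynomial-time computability is easy to sketch: run each partial solver in turn on the residual game and repeat until no solver makes progress. If each solver is polynomial-time and every productive round removes at least one node, the composition remains polynomial. The game is abstracted as a bare node set and the two toy "solvers" are invented stand-ins, not the paper's solvers.

```python
def compose(solvers, game):
    """Round-robin composition of partial solvers: each run sees the
    residual game left by the others; stop when no solver decides any
    further node. Returns the decided nodes and the residual game."""
    solved = {}
    progress = True
    while progress:
        progress = False
        for solve in solvers:
            decided = solve(game)          # maps node -> winning player
            if decided:
                solved.update(decided)
                game = game - decided.keys()
                progress = True
    return solved, game

# Toy partial solvers over a game abstracted as a set of integer nodes.
solve_even = lambda g: {n: 0 for n in g if n % 2 == 0}
solve_small = lambda g: {n: 1 for n in g if n < 2}

solved, residual = compose([solve_even, solve_small], set(range(6)))
```

Note that the composition may still leave a residual game (here nodes 3 and 5), which is precisely the partiality the paper studies: which games can composed solvers neither solve nor simplify.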

    Confidence analysis for nuclear arms control: SMT abstractions of Game Theoretic Models

    We consider the use of game theory in an arms control inspection planning scenario. Specifically, we develop a case study that games the number of inspections available against an ideal treaty length. Normal game theoretic techniques struggle to justify pay-off values to use for certain events, limiting the usefulness of such techniques. In order to improve the value of using game theory for decision making, we introduce a methodology for under-specifying the game theoretic models through a mixture of regression techniques and Satisfiability Modulo Theory (SMT) constraint solving programs. Our approach allows a user to under-specify pay-offs in games, and to check, in a manner akin to robust optimisation, how such under-specifications affect the 'solution' of a game. We analyse the Nash equilibria and the mixed strategy sets that would lead to such equilibria - and explore how to maximise expected pay-offs and use of individual pure strategies for all possible values of an under-specification. Through this approach, we gain an insight into how - irrespective of uncertainty - we can still compute with game theoretic models, and present the types and kinds of analysis we can run that benefit from this uncertainty.
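The core robustness question, for which values of an under-specified pay-off does a candidate equilibrium survive, can be illustrated without a solver. The 2x2 game and all numbers below are invented, and the paper answers this question symbolically with SMT rather than by the parameter sweep used here.

```python
def is_pure_nash(payoffs, r, c):
    """payoffs[(row, col)] = (row player's pay-off, column player's).
    (r, c) is a pure Nash equilibrium if neither player gains by a
    unilateral deviation."""
    pr, pc = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= pr for r2 in (0, 1))
    col_ok = all(payoffs[(r, c2)][1] <= pc for c2 in (0, 1))
    return row_ok and col_ok

def robust_values(theta_lo, theta_hi, steps=100):
    """Sweep the under-specified pay-off theta over its interval and
    record the sampled values for which (0, 0) stays an equilibrium."""
    good = []
    for i in range(steps + 1):
        theta = theta_lo + (theta_hi - theta_lo) * i / steps
        payoffs = {
            (0, 0): (3, theta),   # theta: column player's uncertain pay-off
            (0, 1): (1, 2),
            (1, 0): (0, 0),
            (1, 1): (2, 1),
        }
        if is_pure_nash(payoffs, 0, 0):
            good.append(theta)
    return good

surviving = robust_values(0.0, 4.0)
```

Here (0, 0) survives exactly when theta >= 2, so the analyst learns a threshold on the uncertain pay-off rather than a single brittle answer, which is the kind of conclusion the abstract's robust-optimisation analogy points at.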